37 research outputs found

    Detection and location of domestic waste for planning its collection using an autonomous robot

    Paper submitted to the 8th International Conference on Control, Automation and Robotics (ICCAR), Xiamen, China, April 8-10, 2022.
    This paper presents an approach to a detection and location system for waste recognition in outdoor environments that can be used on an autonomous robot for garbage collection. The system is composed of a camera and a LiDAR. For the detection task, several YOLO models were trained and tested for waste classification using our own dataset acquired from the camera. The image coordinates predicted by the best detector are used to compute the location relative to the camera. The LiDAR is then used to obtain a global waste location relative to the robot by transforming the coordinates of the center of each trash instance. Our detection approach was tested in outdoor environments, obtaining a mAP@0.5 of around 0.99, a mAP@0.5:0.95 of over 0.84 and an average detection time of less than 40 ms, making real-time operation possible. The location method was also tested with objects at a maximum distance of 8 m, obtaining an average error smaller than 0.25 m.
    This research was funded by the Spanish Government through the project RTI2018-094279-B-I00. Computer facilities were provided by the Valencian Government and FEDER through the IDIFEFER/2020/003
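The pixel-to-robot step described above can be sketched as follows: a detection center is back-projected through a pinhole camera model and then transformed into the robot frame. This is a minimal illustration, not the authors' implementation; the function name, intrinsic matrix and extrinsic transform are hypothetical placeholders.

```python
import numpy as np

def pixel_to_robot_frame(u, v, depth, K, T_robot_cam):
    """Back-project a detection center (u, v) with a depth measurement
    into the robot frame using a pinhole camera model.

    K is the 3x3 camera intrinsic matrix; T_robot_cam is the 4x4
    camera-to-robot extrinsic transform (assumed known from calibration).
    """
    fx, fy = K[0, 0], K[1, 1]
    cx, cy = K[0, 2], K[1, 2]
    # 3D point in the camera frame (homogeneous coordinates)
    p_cam = np.array([(u - cx) * depth / fx,
                      (v - cy) * depth / fy,
                      depth,
                      1.0])
    # Rigid transform camera -> robot
    return (T_robot_cam @ p_cam)[:3]

# Example with hypothetical intrinsics and an identity extrinsic:
K = np.array([[600.0, 0.0, 320.0],
              [0.0, 600.0, 240.0],
              [0.0, 0.0, 1.0]])
T = np.eye(4)
p = pixel_to_robot_frame(320.0, 240.0, 2.0, K, T)
```

A detection at the image center with a 2 m depth reading maps straight down the optical axis of this hypothetical camera.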

    Assistance Robotics and Biosensors 2019

    This Special Issue is focused on breakthrough developments in the field of assistive and rehabilitation robotics. The selected contributions include current scientific progress from biomedical signal processing and cover applications to myoelectric prostheses, lower-limb and upper-limb exoskeletons and assistive robotics

    Assistance Robotics and Biosensors

    This Special Issue is focused on breakthrough developments in the field of biosensors and current scientific progress in biomedical signal processing. The papers address innovative solutions in assistance robotics based on bioelectrical signals, including affordable biosensor technology, affordable assistive-robotics devices, new techniques in myoelectric control, and advances in brain–machine interfacing

    Using an RGB-D camera for 6DoF SLAM

    This paper presents a method for the fast calculation of the egomotion performed by a robot using visual features. The method is part of a complete system for automatic map building and Simultaneous Localization and Mapping (SLAM). It uses optical flow to determine whether the robot has moved. If so, visual features that do not satisfy several criteria (such as intersection and uniqueness) are deleted, and the egomotion is then calculated. We use a state-of-the-art algorithm (TORO) to rectify the map and solve the SLAM problem. The proposed method provides better efficiency than other current methods.
    The authors wish to express their gratitude to the Spanish Ministry of Science and Technology (MYCIT) and the Research and Innovation Vice-president Office of the University of Alicante for their financial support through the projects DPI2009-07144 and GRE10-16, respectively
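The movement gate described above can be illustrated with a small sketch. The paper uses optical flow; the frame-difference proxy below is only a simplified stand-in that shows the gating idea, with a hypothetical function name and threshold.

```python
import numpy as np

def robot_has_moved(prev_frame, frame, threshold=5.0):
    """Crude movement gate: mean absolute intensity change between two
    consecutive grayscale frames. A real system would evaluate the
    magnitude of the optical-flow field instead; the threshold here is
    purely illustrative."""
    diff = np.abs(frame.astype(np.float32) - prev_frame.astype(np.float32))
    return float(diff.mean()) > threshold

# Synthetic frames: identical frames imply no motion, a uniform
# brightness shift stands in for a camera displacement.
still = np.zeros((48, 64), dtype=np.uint8)
moved = np.full((48, 64), 20, dtype=np.uint8)
```

Only when the gate fires would the (more expensive) feature filtering and egomotion estimation run.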

    Vision and Tactile Robotic System to Grasp Litter in Outdoor Environments

    The accumulation of litter is increasing in many places and is consequently becoming a problem that must be dealt with. In this paper, we present a manipulator robotic system to collect litter in outdoor environments. This system has three functionalities. Firstly, it uses colour images to detect and recognise litter comprising different materials. Secondly, depth data are combined with pixels of waste objects to compute a 3D location and segment three-dimensional point clouds of the litter items in the scene. The grasp in 3 Degrees of Freedom (DoFs) is then estimated for a robot arm with a gripper for the segmented cloud of each instance of waste. Finally, two tactile-based algorithms are implemented and employed to provide the gripper with a sense of touch. This work uses two low-cost visual-based tactile sensors at the fingertips. One of them addresses the detection of contact between the gripper and solid waste (obtained from tactile images), while the other has been designed to detect slippage in order to prevent the grasped objects from falling. Our proposal was successfully tested by carrying out extensive experimentation with objects varying in size, texture, geometry and material in different outdoor environments (a tiled pavement, a surface of stone/soil, and grass). Our system achieved an average score of 94% for the detection and Collection Success Rate (CSR) as regards its overall performance, and of 80% for the collection of items of litter at the first attempt.
    Open Access funding provided thanks to the CRUE-CSIC agreement with Springer Nature. Research work was funded by the Valencian Regional Government and FEDER through the PROMETEO/2021/075 project. The computer facilities were provided through the IDIFEFER/2020/003 project
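As a rough illustration of the tactile side, both contact and slip can be posed as threshold tests on visual-tactile images. This sketch is not the authors' algorithms, which are more elaborate; the thresholds, function names and image sizes are hypothetical.

```python
import numpy as np

def detect_contact(tactile_img, baseline, contact_thresh=8.0):
    """Flag contact when the tactile image deviates from a no-contact
    baseline image by more than a threshold (threshold illustrative)."""
    diff = np.abs(tactile_img.astype(np.float32) - baseline.astype(np.float32))
    return float(diff.mean()) > contact_thresh

def detect_slip(prev_img, img, slip_thresh=3.0):
    """Flag slip when consecutive tactile frames keep changing while an
    object is held: a stable grasp yields near-identical frames."""
    diff = np.abs(img.astype(np.float32) - prev_img.astype(np.float32))
    return float(diff.mean()) > slip_thresh

# Synthetic tactile frames for illustration
baseline = np.zeros((32, 32), dtype=np.uint8)   # no contact
pressed = np.full((32, 32), 30, dtype=np.uint8)  # fingertip deformed
```

In a grasping loop, a slip flag would typically trigger an increase in grip force before the object falls.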

    Detection and depth estimation for domestic waste in outdoor environments by sensors fusion

    In this work, we estimate the depth at which domestic waste is located in space from a mobile robot in outdoor scenarios. As this calculation covers a broad range of distances (0.3-6.0 m), we use RGB-D camera and LiDAR fusion. With this aim and range, we compare several methods, such as the average, nearest, median and center point, applied to the points inside a reduced or non-reduced Bounding Box (BB). These BBs are obtained from segmentation and detection methods representative of these techniques, such as Yolact, SOLO, You Only Look Once (YOLO)v5, YOLOv6 and YOLOv7. Results show that applying a detection method with the average technique and a 40% BB reduction returns the same output as segmenting the object and applying the average method. Moreover, the detection method is faster and lighter than the segmentation one. The median error in the conducted experiments was 0.0298 ± 0.0544 m.
    Research work was funded by the Valencian Regional Government and FEDER through the PROMETEO/2021/075 project and the Spanish Government through the Formación del Personal Investigador [Research Staff Formation (FPI)] under Grant PRE2019-088069. The computer facilities were provided through the IDIFEFER/2020/003 project
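The reduced-bounding-box averaging described above can be sketched as follows. This is an illustrative reading of the idea, not the authors' code; the shrink convention (each dimension reduced by the given fraction, keeping the central region) and the handling of missing depth returns are assumptions.

```python
import numpy as np

def bb_average_depth(depth_map, bb, shrink=0.40):
    """Average the depth values inside a bounding box shrunk by `shrink`.

    Shrinking the box (e.g. by 40%) keeps only its central region, which
    discards background pixels near the box edges before averaging.
    bb = (x0, y0, x1, y1) in pixel coordinates.
    """
    x0, y0, x1, y1 = bb
    w, h = x1 - x0, y1 - y0
    dx, dy = int(w * shrink / 2), int(h * shrink / 2)
    roi = depth_map[y0 + dy:y1 - dy, x0 + dx:x1 - dx]
    valid = roi[roi > 0]  # ignore missing/zero depth returns
    return float(valid.mean())

# Synthetic depth map: a flat scene at 2 m with a hypothetical detection box
depth_map = np.full((100, 100), 2.0)
d = bb_average_depth(depth_map, (10, 10, 50, 50))
```

Because only the shrunk region is averaged, the result approximates what segmenting the object and averaging would give, at lower cost.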

    Virtualization of Robotic Hands Using Mobile Devices

    This article presents a multiplatform application for the tele-operation of a robot hand using virtualization in Unity 3D. This approach grants usability to users who need to control a robotic hand, allowing supervision in a collaborative way. The paper focuses on a user application designed for the 3D virtualization of a robotic hand and on the tele-operation architecture. The designed system allows for the simulation of any robotic hand. It has been tested with the virtualization of the four-fingered Allegro Hand of SimLab, with 16 degrees of freedom, and the Shadow hand, with 24 degrees of freedom. The system allows for the control of the position of each finger by means of joint and Cartesian coordinates. All user control interfaces are designed using Unity 3D, such that a multiplatform philosophy is achieved. The server side allows the user application to connect to a ROS (Robot Operating System) server through a TCP/IP socket, either to control a real hand or to share a simulation of it among several users. If a real robot hand is used, real-time control and feedback of all the joints of the hand are communicated to the set of users. Finally, the system has been tested with a set of users with satisfactory results.
    This research was funded by Ministerio de Ciencia, Innovación y Universidades, grant number RTI2018-094279-B-I00
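The client-server exchange described above (a user application talking to a ROS-side server over a TCP/IP socket) can be illustrated with a minimal sketch. The real system uses Unity 3D on the client and ROS on the server; here both ends are plain Python in one process, and the JSON message format is an assumption made purely for illustration.

```python
import json
import socket
import threading

# Server socket bound first (ephemeral port) so the client knows where to connect.
srv = socket.socket()
srv.bind(("127.0.0.1", 0))
srv.listen(1)
port = srv.getsockname()[1]

def serve_once():
    """Stand-in for the ROS-side server: read one joint-command message
    and echo the resulting joint state back to the client."""
    conn, _ = srv.accept()
    msg = json.loads(conn.recv(4096).decode())
    conn.sendall(json.dumps({"state": msg["joints"]}).encode())
    conn.close()

def send_joint_command(joints):
    """Client side, played by the Unity application in the real system:
    send target joint angles, receive the reported joint state."""
    cli = socket.socket()
    cli.connect(("127.0.0.1", port))
    cli.sendall(json.dumps({"joints": joints}).encode())
    reply = json.loads(cli.recv(4096).decode())
    cli.close()
    return reply["state"]

t = threading.Thread(target=serve_once)
t.start()
state = send_joint_command([0.0, 0.5, 1.0])  # hypothetical joint angles (rad)
t.join()
srv.close()
```

In the shared-simulation mode the article describes, the server would broadcast each received state to every connected user rather than echoing to one.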

    Teaching image and video processing with a practical cases-based methodology at the University of Alicante

    At universities, courses on computer vision and image processing have usually been taught using a classical methodology based on textbooks, notes and practical exercises explained on a blackboard, although teachers sometimes use a video projector and PC presentations to show slides with static content, mainly text, figures and photos. However, this teaching model is focused on the teacher rather than on the students, and this approach is consequently not effective when the teacher seeks to achieve cognitive objectives that involve students' critical thinking. That is, students should be able to develop skills such as an understanding of how the filters, operators, methods and techniques for image processing are implemented by computer vision software libraries. Moreover, students should be motivated to use that background to solve practical, real problems for which a study and analysis of image-based features is advisable. This manuscript presents the development, implementation and assessment of a practical-cases-based engineering course, specifically the image and video processing course at the University of Alicante. All course lectures and hands-on laboratory activities have the main goal of ensuring that students acquire not only image-specific technical skills but also general knowledge of data analysis in order to discover phenomena in pixel regions of images and video frames. In this way, our teaching-learning process accomplishes both knowledge assimilation and skill development using a continuous evaluation strategy. In addition, this work presents a learning analysis based on a comparison of student results, considering prior knowledge (at the beginning of the course) and acquired knowledge and experience (at the end of the course) in issues related to the specific subject.
The image and video processing course is a compulsory subject of the Degree in Multimedia Engineering at the University of Alicante, in Spain. It is taught during a 14-week period of the third academic year and has a workload of 6 ECTS (European Credit Transfer and Accumulation System) credits, distributed as 30 hours of computer and laboratory practice and 30 hours of theory classes. Currently, 80 students are enrolled in this course

    A Vision-Driven Collaborative Robotic Grasping System Tele-Operated by Surface Electromyography

    This paper presents a system that combines computer vision and surface electromyography techniques to perform grasping tasks with a robotic hand. In order to achieve a reliable grasping action, the vision-driven system is used to compute pre-grasping poses of the robotic system based on the analysis of three-dimensional object features. The human operator can then correct the pre-grasping pose of the robot using surface electromyographic signals from the forearm during wrist flexion and extension. Weak wrist flexions and extensions allow a fine adjustment of the robotic system to grasp the object and, finally, when the operator considers that the grasping position is optimal, a strong flexion is performed to initiate the grasping of the object. The system has been tested with several subjects to check its performance, showing a grasping accuracy of around 95% of the attempted grasps, which improves by more than 13% on the grasping accuracy of previous experiments in which electromyographic control was not implemented.
    This work was funded by the Spanish Government's Ministry of Economy, Industry and Competitiveness through the DPI2015-68087-R project, by the European Commission and FEDER funds through the COMMANDIA (SOE2/P1/F0638) action supported by Interreg-V Sudoe, and by the University of Alicante through project GRE16-20, Control Platform for a Robotic Hand based on Electromyographic Signals
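The weak/strong flexion logic described above can be sketched as a two-threshold rule on the RMS envelope of the sEMG signal. This is an illustrative sketch, not the authors' controller; the thresholds, window handling and state names are hypothetical, and in practice thresholds would be calibrated per user.

```python
import math

def emg_rms(window):
    """Root-mean-square envelope of a raw sEMG sample window."""
    return math.sqrt(sum(s * s for s in window) / len(window))

def classify_flexion(window, weak_thresh=0.1, strong_thresh=0.5):
    """Two-threshold rule: weak muscle activity fine-tunes the pre-grasp
    pose, strong activity triggers the grasp itself. Threshold values
    are illustrative placeholders."""
    rms = emg_rms(window)
    if rms >= strong_thresh:
        return "grasp"    # strong flexion: initiate grasping
    if rms >= weak_thresh:
        return "adjust"   # weak flexion/extension: correct the pose
    return "idle"         # below both thresholds: no command
```

The sign of the pose correction (flexion vs. extension) would come from a separate channel or classifier; this sketch only covers the intensity gating.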

    Study and planning of contents, materials and teaching methodologies according to the EHEA: Digital Leisure Itinerary (Fourth Year of Multimedia Engineering)

    This report describes the project carried out for the planning of contents, materials and teaching methodology, and for the monitoring of the subjects of the Digital Creation and Entertainment itinerary of the fourth year of the Degree in Multimedia Engineering at the Escuela Politécnica Superior, as a continuation of the approach taken in the fourth-year network project set up in the 2012-13 academic year (identifier 2687). Within the framework created by the new degree programmes under the EHEA, the main objective of the project has been the preparation, coordination and monitoring of the subjects of this itinerary, taking into account the application of the Project-Based Learning (PBL) methodology. Building on the experience gained in planning previous years, the teaching guides of the subjects have been drawn up by adjusting those of the previous year. The results obtained in the first experience carried out in this academic year are also presented, expressed in hours devoted to the project and in the degree of satisfaction of both students and teaching staff with the PBL methodology. Finally, an informative web page for the itinerary has been created explaining all aspects of the methodology, and two articles have been written detailing all the work carried out